
1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPAW/3PJ97CE
Repository: sid.inpe.br/sibgrapi/2017/09.05.16.05
Last Update: 2017:09.05.16.05.38 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2017/09.05.16.05.38
Metadata Last Update: 2022:05.18.22.18.24 (UTC) administrator
Citation Key: Mesquita:2017:ViSeOb
Title: Visual Search for Object Instances Guided by Visual Attention Algorithms
Format: On-line
Year: 2017
Access Date: 2024, Apr. 29
Number of Files: 1
Size: 1905 KiB
2. Context
Author: Mesquita, Rafael Galvão de
Affiliation: Universidade Federal de Pernambuco
Editor: Torchelsen, Rafael Piccin
Nascimento, Erickson Rangel do
Panozzo, Daniele
Liu, Zicheng
Farias, Mylène
Viera, Thales
Sacht, Leonardo
Ferreira, Nivan
Comba, João Luiz Dihl
Hirata, Nina
Schiavon Porto, Marcelo
Vital, Creto
Pagot, Christian Azambuja
Petronetto, Fabiano
Clua, Esteban
Cardeal, Flávio
e-Mail Address: rgm@cin.ufpe.br
Conference Name: Conference on Graphics, Patterns and Images, 30 (SIBGRAPI)
Conference Location: Niterói, RJ, Brazil
Date: 17-20 Oct. 2017
Publisher: Sociedade Brasileira de Computação
Publisher City: Porto Alegre
Book Title: Proceedings
Tertiary Type: Master's or Doctoral Work
History (UTC): 2017-09-05 16:05:38 :: rgm@cin.ufpe.br -> administrator ::
2022-05-18 22:18:24 :: administrator -> :: 2017
3. Content and structure
Is the master or a copy? is the master
Content Stage: completed
Transferable: 1
Keywords: visual search, saliency detection, visual attention, object recognition, local feature detectors/descriptors, matching
Abstract: Visual attention is the process by which the human brain prioritizes and controls visual stimuli; among other characteristics of the visual system, it is responsible for the speed with which human beings interact with the environment despite the large amount of information to be processed. Visual attention can be driven by a bottom-up mechanism, in which low-level stimuli of the analysed scene, such as color, guide the focus of attention to salient regions (regions that stand out from their neighborhood or from the whole scene); or by a top-down mechanism, in which cognitive factors, such as expectations or the goal of completing a certain task, define the attended location. This Thesis investigates the use of visual attention algorithms to guide (and to accelerate) the search for objects in digital images. Inspired by the bottom-up mechanism, a saliency detector based on an estimate of the scene's background combined with the result of a Laplacian-based operator, referred to as BLS (Background Laplacian Saliency), is proposed. Moreover, a modification of the SURF (Speeded-Up Robust Features) local feature detector/descriptor, named patch-based SURF, is designed so that recognition occurs iteratively in each focused location of the scene, instead of performing the classical recognition (classic search), in which the whole scene is analysed at once. The search mode in which patch-based SURF is applied and the order of the image regions to be analysed is defined by a saliency detection algorithm is called BGMS. The BLS and nine other state-of-the-art saliency detection algorithms are evaluated within the BGMS. Results indicate, on average, a reduction to (i) 73% of the classic search processing time just by applying patch-based SURF in a random search, and (ii) to 53% of this time when the search is guided by BLS. When other state-of-the-art saliency detection algorithms are used, between 55% and 133% of the processing time of the classic search is needed to perform recognition. Moreover, inspired by the top-down mechanism, the BGCO is proposed, in which the visual search prioritizes scene descriptors according to their Hamming distance to the descriptors of a given target object. The BGCO uses Bloom filters to represent feature vectors that are similar to the descriptors of the searched object, and it has constant space and time complexity with respect to the number of descriptors of the target. Experiments showed a reduction of the processing time to 80% of that required by the classic search. Finally, by using the BGMS and the BGCO in an integrated way, the search processing time was reduced to 44% of the execution time required by the classic search.
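The abstract's BGCO idea — a Bloom filter standing in for the set of descriptors similar to the target, queried in constant time to rank scene descriptors — can be illustrated with a toy sketch. This is not the Thesis's implementation: the filter parameters, the tiny 8-bit binary descriptors, and the helper names (`neighbors_within`, `prioritize`) are illustrative assumptions only.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: constant space, constant-time add/query
    (with a small false-positive rate, never false negatives)."""
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indices(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indices(item))

def hamming(a, b):
    """Hamming distance between two equal-length binary strings."""
    return sum(x != y for x, y in zip(a, b))

def neighbors_within(desc):
    """desc plus every binary string at Hamming distance 1 from it."""
    out = {desc}
    for i in range(len(desc)):
        flipped = desc[:i] + ("1" if desc[i] == "0" else "0") + desc[i + 1:]
        out.add(flipped)
    return out

def prioritize(scene_descriptors, bloom):
    """Rank scene descriptors so filter members (likely near the
    target) are examined first; order is otherwise preserved."""
    return sorted(scene_descriptors, key=lambda d: d not in bloom)

# Populate the filter with a (hypothetical) target descriptor and
# its close neighbors, then rank a scene's descriptors against it.
bf = BloomFilter()
target = "10110010"
for d in neighbors_within(target):
    bf.add(d)
ranked = prioritize(["01001101", "10110011", "11111111"], bf)
```

Here `"10110011"` is one bit flip away from the target, so it is in the filter and is moved to the front of `ranked`; the two distant descriptors keep their relative order behind it. The constant space/time property the abstract mentions comes from the fixed bit array and fixed number of hash probes, independent of how many descriptors were inserted.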
Arrangement: urlib.net > SDLA > Fonds > SIBGRAPI 2017 > Visual Search for...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content:
agreement.html 05/09/2017 13:05 1.2 KiB 
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPAW/3PJ97CE
zipped data URL: http://urlib.net/zip/8JMKD3MGPAW/3PJ97CE
Language: en
Target File: MesquitaMello_final.pdf
User Group: rgm@cin.ufpe.br
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPAW/3PKCC58
Citing Item List: sid.inpe.br/sibgrapi/2017/09.12.13.04 5
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination doi edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url versiontype volume

